Learning Word Vectors in Deep Walk using Convolution
Authors
Abstract
Textual queries in networks such as Twitter can carry more than one label, giving rise to a multi-label classification problem. To reduce computational cost, a low-dimensional representation of the large network is learned that preserves proximity among nodes in the same community. Just as sentences are sequences of words, DeepWalk treats truncated random walks over the graph as sequences of nodes and learns representations in an unsupervised manner using hierarchical softmax. In this paper, we generate network abstractions at different levels using deep convolutional neural networks. Since the class labels of connected nodes in a network keep changing, we employ a fuzzy recurrent feedback controller to ensure robustness to noise.
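The DeepWalk step mentioned above (random walks treated as sentences of nodes, later embedded via skip-gram with hierarchical softmax) can be sketched as follows. The graph, function name, and parameters are illustrative assumptions, not details taken from the paper:

```python
import random

def random_walks(graph, num_walks, walk_length, seed=0):
    """Truncated random walks over a graph; each walk plays the role
    of a 'sentence' of node ids, as in DeepWalk."""
    rng = random.Random(seed)
    walks = []
    nodes = list(graph)
    for _ in range(num_walks):
        rng.shuffle(nodes)  # a fresh node order on each pass
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = graph[walk[-1]]
                if not neighbors:
                    break
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# Toy graph as adjacency lists (hypothetical example data).
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walks = random_walks(graph, num_walks=2, walk_length=5)
```

The resulting walks would then be fed to a skip-gram model with hierarchical softmax (for example, gensim's `Word2Vec` with `sg=1, hs=1`) to obtain the low-dimensional node vectors.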
Similar resources
Semantic retrieval of personal photos using a deep autoencoder fusing visual features with speech annotations represented as word/paragraph vectors
It is very attractive for the user to retrieve photos from a huge collection using high-level personal queries (e.g. “uncle Bill’s house”), but technically very challenging. Previous works proposed a set of approaches toward the goal assuming only 30% of the photos are annotated by sparse spoken descriptions when the photos are taken. In this paper, to promote the interaction between different ...
Unsupervised Learning of Word Semantic Embedding using the Deep Structured Semantic Model
Deep neural network (DNN) based natural language processing models rely on a word embedding matrix to transform raw words into vectors. Recently, a deep structured semantic model (DSSM) has been proposed to project raw text to a continuously-valued vector for Web Search. In this technical report, we propose learning word embedding using DSSM. We show that the DSSM trained on a large body of text ...
Learning Robust Deep Face Representation
With the development of convolutional neural networks, more and more researchers have focused their attention on the advantages of CNNs for the face recognition task. In this paper, we propose a deep convolutional network for learning a robust face representation. The deep convolutional net is constructed from 4 convolution layers, 4 max pooling layers and 2 fully connected layers, which in total contain about 4M pa...
Permutations as a Means to Encode Order in Word Space
We show that sequence information can be encoded into high-dimensional fixed-width vectors using permutations of coordinates. Computational models of language often represent words with high-dimensional semantic vectors compiled from word-use statistics. A word's semantic vector usually encodes the contexts in which the word appears in a large body of text but ignores word order. However, word o...
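A minimal sketch of the permutation idea described in this snippet, under stated assumptions: each word gets a fixed random index vector, and a context word's vector is permuted according to its relative position before summation, so the same words in a different order produce a different context vector. The function name and the use of a cyclic shift (`np.roll`) as the permutation are illustrative choices, not details from the cited paper:

```python
import numpy as np

def order_aware_context(context, dim=1000, seed=0):
    """Sum permuted random index vectors; a cyclic shift stands in
    for the position-dependent coordinate permutation."""
    rng = np.random.default_rng(seed)
    # One fixed random index vector per word (sorted for determinism).
    index = {w: rng.standard_normal(dim) for w in sorted(set(context))}
    center = len(context) // 2
    vec = np.zeros(dim)
    for pos, w in enumerate(context):
        if pos == center:
            continue  # skip the focus word itself
        # Shift by the relative position: encodes *where* the word occurred.
        vec += np.roll(index[w], pos - center)
    return vec

# Same words, different order -> different context vectors.
v_abc = order_aware_context(["a", "b", "c"], dim=16)
v_cba = order_aware_context(["c", "b", "a"], dim=16)
```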
Distributional Models and Deep Learning Embeddings: Combining the Best of Both Worlds
There are two main approaches to the distributed representation of words: low-dimensional deep learning embeddings and high-dimensional distributional models, in which each dimension corresponds to a context word. In this paper, we combine these two approaches by learning embeddings based on distributional-model vectors – as opposed to one-hot vectors as is standardly done in deep learning. We sh...
Publication date: 2017